@Elvis339 Elvis339 commented Dec 3, 2025

Why this should be merged

Enables Firewood to track performance over time by running C-Chain reexecution benchmarks with custom Firewood builds. This establishes the infrastructure for catching performance regressions before they reach production.

ava-labs/firewood#1494

How this works

  1. Firewood triggers the existing C-Chain reexecution workflow via the GitHub API with the `with-dependencies` parameter
  2. Workflow sets up dependencies via task setup-reexecution-deps before calling the benchmark action
  3. The task uses polyrepo to sync Firewood at the specified ref
  4. Runs C-Chain reexecution benchmark with the custom build
  5. Uploads results as artifact for Firewood to download and track separately

The root action remains unaware of external dependency configuration; setup is handled by the workflow before invocation.
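A minimal sketch of that ordering as a workflow fragment (the step names and action path here are illustrative assumptions, not the exact files in this PR):

```yaml
steps:
  - uses: actions/checkout@v4
  # Dependency setup happens in the workflow, before the action is invoked,
  # so the action itself stays unaware of firewood/libevm refs.
  - name: Setup custom dependencies
    run: FIREWOOD_REF=${{ inputs.firewood-ref }} ./scripts/run_task.sh setup-reexecution-deps
  - name: Run C-Chain reexecution benchmark
    uses: ./.github/actions/c-chain-reexecution-benchmark  # hypothetical path
```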

The same functionality is available locally for development:

```bash
nix develop
FIREWOOD_REF=abc123 ./scripts/run_task.sh setup-reexecution-deps
./scripts/run_task.sh test-cchain-reexecution -- firewood-101-250k
```

Changes

  • Add setup-reexecution-deps task for dependency setup (externalized from action)
  • Update workflows to call setup task before invoking benchmark action
  • Root action no longer accepts dependency refs (workflow responsibility)
  • Always upload benchmark artifacts (enables downstream tracking)
  • Update documentation in C-Chain Re-Execution README

How this was tested

```bash
gh workflow run "C-Chain Re-Execution Benchmark w/ Container" \
  --ref es/enable-firewood-dev-workflow \
  -f test=firewood-101-250k \
  -f with-dependencies="firewood=v0.0.15,libevm=v1.13.15-0.20251210210615-b8e76562a300" \
  -f runner=avalanche-avalanchego-runner-2ti \
  -f timeout-minutes=60
```

Need to be documented in RELEASES.md?

No

@Elvis339 Elvis339 self-assigned this Dec 3, 2025
Copilot AI review requested due to automatic review settings December 3, 2025 17:43
@Elvis339 Elvis339 requested review from a team and aaronbuchwald as code owners December 3, 2025 17:43
@Elvis339 Elvis339 added the ci This focuses on changes to the CI process label Dec 3, 2025

Copilot AI left a comment


Pull request overview

This PR establishes infrastructure for tracking Firewood performance over time by enabling C-Chain reexecution benchmarks with custom Firewood builds. The workflow can be triggered from the Firewood repository with either published versions (for quick testing) or branch/commit references (for comprehensive testing with source builds).

Key changes:

  • Adds a reusable workflow that accepts Firewood version/branch/commit as input
  • Implements intelligent build strategy: uses go get for published versions, builds from source for branches/commits
  • Creates build script for compiling Firewood FFI from source using Nix

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| .github/workflows/c-chain-reexecution-benchmark-firewood.yml | New workflow that orchestrates benchmark execution with custom Firewood builds and uploads results as artifacts |
| graft/coreth/scripts/build_firewood.sh | Shell script to clone, build, and optionally integrate Firewood FFI from source |
| .github/workflows/c-chain-reexecution-benchmark-container.yml | Removes container configuration (unrelated cleanup) |


@Elvis339 Elvis339 requested a review from maru-ava December 9, 2025 18:10
Elvis339 and others added 4 commits December 9, 2025 22:10
…ution benchmarks

Integrate firewood/libevm dependency overrides into existing workflows
using polyrepo, eliminating the need for a separate firewood workflow.

- Add LIBEVM_REF/FIREWOOD_REF to reexecute-cchain-range-with-copied-data
- Update composite action and workflows to pass inputs
- Remove redundant firewood workflow and build_firewood.sh script
…/avalanchego into es/enable-firewood-dev-workflow
@joshua-kim joshua-kim removed their request for review January 6, 2026 15:25

@aaronbuchwald aaronbuchwald left a comment


I do not think that the re-execution benchmark action itself should be responsible for coupling firewood/libevm refs when any workflow invoking this can simply update the dependencies before invoking this action as a step. Separately, the action should not support a special case of archiving data just for Firewood.

These are simple pre- and post-execution steps, which can be easily added as a step before or after the action gets invoked within a workflow.

If Firewood wants to execute this separately, why not just define its own workflow to do so? If it wants to execute it within AvalancheGo, then we should just write the data to the default location using Firewood as the config.

@Elvis339 (Author) replied:

> I do not think that the re-execution benchmark action itself should be responsible for coupling firewood/libevm refs when any workflow invoking this can simply update the dependencies before invoking this action as a step. Separately, the action should not support a special case of archiving data just for Firewood.
>
> These are simple pre- and post-execution steps, which can be easily added as a step before or after the action gets invoked within a workflow.
>
> If Firewood wants to execute this separately, why not just define its own workflow to do so? If it wants to execute it within AvalancheGo, then we should just write the data to the default location using Firewood as the config.

Resolved offline. Root action now:

  • Has no knowledge of firewood/libevm refs (workflow handles setup via task)
  • Pushes to GitHub Action Benchmark (controlled by input)
  • Always uploads artifacts

Dependency setup externalized to task setup-reexecution-deps, called by workflow before action.
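The externalized setup can be sketched roughly as follows. This is a hypothetical illustration based on this thread (`go get` for libevm, polyrepo for Firewood); the module path and script invocation are assumptions, and the function prints the planned commands rather than executing them:

```shell
# Hypothetical sketch of what setup-reexecution-deps does.
# Prints the commands it would run, one per line.
plan_reexecution_deps() {
  if [ -n "${LIBEVM_REF:-}" ]; then
    # libevm is a Go module override, so it needs go get + go mod tidy
    echo "go get github.com/ava-labs/libevm@${LIBEVM_REF} && go mod tidy"
  fi
  if [ -n "${FIREWOOD_REF:-}" ]; then
    # Firewood is synced at the requested ref via polyrepo
    echo "./scripts/run_polyrepo.sh sync firewood@${FIREWOOD_REF}"
  fi
}
```

Either variable may be set independently; unset refs simply produce no step.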

Taskfile.yml Outdated

setup-reexecution-deps:
desc: Setup custom deps for reexecution benchmarks. Use LIBEVM_REF and/or FIREWOOD_REF env vars.
cmds:

Contributor:

(No action required) Maybe update the polyrepo script to accept LIBEVM_REF and FIREWOOD_REF so that this task is just providing those env vars to the script?

@Elvis339 Elvis339 commented Jan 22, 2026

That's so much better, thanks for recommending this.

954121b

Edit: new commit, updated the doc and how polyrepo is run 1f06a14

runs-on: ${{ matrix.runner }}
steps:
- uses: actions/checkout@v4
- name: Install Nix

Contributor:

Maybe make nix installation optional as well?

Also, what is the implication of calling this action twice (once here, once in the reexec action)?

@Elvis339 Elvis339 commented Jan 22, 2026

Made Nix installation conditional: it only runs when libevm-ref or firewood-ref is specified.
Why the workflow installs Nix before the action:

  • Polyrepo builds Firewood via nix build
  • The nix develop shell provides Go for go get libevm@

Double-install implication: the action's Nix install is idempotent; it detects the existing installation and skips:
https://github.com/ava-labs/avalanchego/actions/runs/21261969343/job/61191483101#step:6:53

The linked run failed for a reason unrelated to Nix: the github-action-benchmark step tries to check out gh-pages but fails because go.mod/go.sum were modified by the dependency setup.

Edit: fixed e3ab8ad https://github.com/ava-labs/avalanchego/actions/runs/21262297545/job/61192576708
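The idempotency guard pattern can be sketched generically. This is a hypothetical helper for illustration, not the installer action's actual code, which performs the detection itself:

```shell
# Generic idempotent-install guard: do nothing if the tool is already
# on PATH, otherwise proceed with installation.
install_guard() {
  tool="$1"
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool already installed; skipping"
  else
    echo "installing $tool"
    # real install command would go here, e.g. the Nix installer script
  fi
}
```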

Contributor:

Maybe add a comment documenting the idempotency for the benefit of future maintainers?

with:
name: benchmark-output-${{ inputs.run-id }}-${{ inputs.run-attempt }}-${{ inputs.job }}-${{ inputs.test }}-${{ inputs.runner_type }}
path: ${{ env.BENCHMARK_OUTPUT_FILE }}
retention-days: 30

Contributor:

Maybe omit this? afaik 30 days is the default

Contributor (Author):

The default is 90 days per GitHub docs.

I'm keeping 30 days explicit because: (1) it documents intent rather than relying on assumed defaults, and (2) 30 days is enough for debugging while avoiding unnecessary artifact accumulation given our run volume.

Contributor:

While documenting intent might be a good principle, what is your intent in this case? We don't set retention days for other uses of this action, what makes this one special?

@Elvis339 Elvis339 commented Jan 22, 2026

Fair point. Removing for consistency with other artifact uploads in the repo. 221157a

- name: Upload Benchmark Artifact
uses: actions/upload-artifact@v4
with:
name: benchmark-output-${{ inputs.run-id }}-${{ inputs.run-attempt }}-${{ inputs.job }}-${{ inputs.test }}-${{ inputs.runner_type }}

Contributor:

What's the rationale for embedding all these values in the name?

Contributor (Author):

Artifact names must be unique within a workflow run. Matrix jobs run in parallel and would otherwise conflict with a 409 error. These values ensure each matrix combination produces a uniquely named artifact.

Contributor (Author):

The identifiers ensure uniqueness:

  • run-id + run-attempt → unique per workflow execution
  • job → matrix job identifier
  • test + runner_type → differentiates matrix combinations

Contributor:

Artifacts are already effectively namespaced by PR run (i.e. run-id and run-attempt). That's why we can use names like upgrade-tmpnet-data and not have collisions across PR runs.

Contributor (Author):

Simplified the name. Thanks for explaining; the tests indeed pass.

6aef9df


First, set up dependencies using the `setup-reexecution-deps` task, then run the benchmark:

```bash
./scripts/run_task.sh test-cchain-reexecution -- firewood-101-250k
```

Contributor:

(No action required) Maybe include a single example with both versions and document that both or either one can be provided? Same comment for the CI invocation examples.

@Elvis339 Elvis339 commented Jan 22, 2026

d9b2717

Edit: new commit, updated the doc and how polyrepo is run 1f06a14

### How It Works

Contributor:

(No action required) Maybe focus this doc additions on usage rather than duplicating implementation details?

@Elvis339 Elvis339 commented Jan 22, 2026

d9b2717

Edit: new commit, updated the doc and how polyrepo is run 1f06a14

- Add LIBEVM_REF and FIREWOOD_REF env var handling to run_polyrepo.sh
- Remove setup-reexecution-deps task from Taskfile.yml
- Update CI workflows to call run_polyrepo.sh directly
- Delete setup_reexecution_deps.sh (no longer needed)
…g permission issue by redirecting Nix's temp directory to /tmp which has proper permissions on ARC
- Remove FIREWOOD_REF env var handling from run_polyrepo.sh
- LIBEVM_REF remains as env var (needs go get)
- Firewood passed as polyrepo arg: sync firewood@<ref>
- Clean up workflow steps with inline conditional
- Update README with new usage patterns
The github-action-benchmark step was running even when
push-github-action-benchmark was false, causing failures when
custom deps modify go.mod/go.sum (can't checkout gh-pages with
uncommitted changes).

Now the step only runs when comparison is actually needed.
- Comparison step conditional on push-github-action-benchmark
- Default true preserves existing behavior
- Prevents gh-pages checkout failure when go.mod/go.sum are dirty (i.e., polyrepo firewood setup)
When custom dependencies (firewood-ref or libevm-ref) are used,
go.mod/go.sum are modified. This causes github-action-benchmark
to fail when checking out gh-pages.

New input skips comparison step only when custom deps are present,
preserving summary display for normal PR runs.
push-github-action-benchmark:
description: 'Whether to push the benchmark result to GitHub.'
required: true
skip-benchmark-comparison:

Contributor (Author):

When custom dependencies (firewood-ref or libevm-ref) are used,
go.mod/go.sum are modified. This causes github-action-benchmark
to fail when checking out gh-pages.

New input skips comparison step only when custom deps are present,
preserving summary display for normal PR runs.
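A sketch of the gating described above (the input names come from this thread; the step body and action reference are illustrative):

```yaml
# Hypothetical fragment: run the comparison only when pushing is enabled
# and custom deps have not dirtied go.mod/go.sum.
- name: Compare benchmark results
  if: ${{ inputs.push-github-action-benchmark == 'true' && inputs.skip-benchmark-comparison != 'true' }}
  uses: benchmark-action/github-action-benchmark@v1
```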

Contributor:

Would it make sense to make this skip automatic with an appropriate warning if the tree is dirty?
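The automatic-skip idea could look roughly like this; an illustrative helper, not code from this PR, which takes the porcelain status as an argument so the decision logic is separate from git itself:

```shell
# Decide whether to skip the gh-pages benchmark comparison.
# Pass in the output of: git status --porcelain
skip_comparison_if_dirty() {
  status="$1"
  if [ -n "$status" ]; then
    # A non-empty porcelain status means files changed (e.g. go.mod/go.sum
    # modified by custom dependency setup), so gh-pages checkout would fail.
    echo "warning: working tree is dirty; skipping benchmark comparison" >&2
    echo "skip"
  else
    echo "compare"
  fi
}
```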



desc: Runs shellcheck to check sanity of shell scripts
cmd: ./scripts/shellcheck.sh

polyrepo:

Contributor:

Please use an action name, e.g. run-polyrepo.

# ./scripts/run_polyrepo.sh [polyrepo args...]
#
# Environment variables (optional):
# LIBEVM_REF - Git ref for libevm (runs: go get && go mod tidy)

Contributor:

Maybe use env vars for both args so that the caller has a consistent way of specifying both values?
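The suggestion might look roughly like this; the function name and argument format are assumptions based on the `sync firewood@<ref>` usage mentioned earlier in the thread:

```shell
# Derive polyrepo sync arguments from env vars so callers have one
# consistent interface for specifying either or both refs.
polyrepo_sync_args() {
  args=""
  [ -n "${FIREWOOD_REF:-}" ] && args="$args firewood@${FIREWOOD_REF}"
  [ -n "${LIBEVM_REF:-}" ] && args="$args libevm@${LIBEVM_REF}"
  # Trim the leading space before printing
  printf '%s\n' "${args# }"
}
```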


Labels

ci This focuses on changes to the CI process

Projects

Status: In Progress 🏗️

Development


5 participants